154 research outputs found

    Can Punctured Rate-1/2 Turbo Codes Achieve a Lower Error Floor than their Rate-1/3 Parent Codes?

    In this paper we concentrate on rate-1/3 systematic parallel concatenated convolutional codes and their rate-1/2 punctured child codes. Assuming maximum-likelihood decoding over an additive white Gaussian noise channel, we demonstrate that a rate-1/2 non-systematic child code can exhibit a lower error floor than that of its rate-1/3 parent code, provided that a particular condition is met. However, under iterative decoding, convergence of the non-systematic code towards low bit-error rates is problematic. To alleviate this problem, we propose rate-1/2 partially systematic codes that can still achieve a lower error floor than that of their rate-1/3 parent codes. Results obtained from extrinsic information transfer charts and simulations support our conclusions.
    Comment: 5 pages, 7 figures, Proceedings of the 2006 IEEE Information Theory Workshop, Chengdu, China, October 22-26, 2006
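    As a loose illustration of the puncturing operation this abstract builds on, the sketch below thins a rate-1/3 turbo codeword (one systematic and two parity streams) down to rate 1/2 by applying periodic puncturing patterns. The streams and patterns shown are hypothetical examples for illustration, not the codes analyzed in the paper.

```python
# Illustrative sketch of puncturing a rate-1/3 turbo codeword to rate-1/2.
# The bit streams and puncturing patterns below are hypothetical examples,
# not the specific patterns analyzed in the paper.

def puncture(systematic, parity1, parity2, p_sys, p_par1, p_par2):
    """Keep bit i of a stream when the corresponding pattern entry is 1.

    Patterns are applied cyclically; a rate-1/2 child of a rate-1/3 parent
    must keep 2 of every 3 generated bits overall.
    """
    out = []
    for i in range(len(systematic)):
        if p_sys[i % len(p_sys)]:
            out.append(systematic[i])
        if p_par1[i % len(p_par1)]:
            out.append(parity1[i])
        if p_par2[i % len(p_par2)]:
            out.append(parity2[i])
    return out

# Example: a partially-systematic rate-1/2 code that transmits every other
# systematic bit (one hypothetical pattern choice among many).
sys_bits = [1, 0, 1, 1]
par1 = [0, 1, 1, 0]
par2 = [1, 1, 0, 0]
punctured = puncture(sys_bits, par1, par2,
                     p_sys=[1, 0],   # keep every other systematic bit
                     p_par1=[1, 1],  # keep all of parity stream 1
                     p_par2=[0, 1])  # keep every other bit of parity stream 2
print(punctured)  # 8 coded bits from 4 info bits -> rate 1/2
```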

    Code development for the computational analysis of crack propagation in structures

    In this study, the main objective was the development of a code that gives a finite element analysis program with no built-in crack-study tools the capability to model the propagation of a crack in a cracked surface. For this purpose, the finite element program FEMAP 11.3.2 with the NX NASTRAN solver was used, and the proposed code was created using the program's Application Program Interface (API). Linear Elastic Fracture Mechanics (LEFM) theory is applied by the code, which can predict whether the crack will propagate, the trajectory of the crack, and the number of load cycles required for the crack to propagate, for given boundary conditions and loads. Finally, the Stress Intensity Factors (SIF) produced by the program were compared with results from an analytical method, and experimental results were used to verify the predicted propagation trajectory and load cycles.
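    The cycle-count prediction mentioned above is conventionally obtained within LEFM by integrating Paris' law, da/dN = C(ΔK)^m with ΔK = YΔσ√(πa). The sketch below is a minimal numerical integration of that relation; all material constants and geometry factors are illustrative assumptions, not values from the study.

```python
import math

# Minimal sketch: number of load cycles for a crack to grow from a0 to af
# under Paris' law, da/dN = C * (dK)^m, with dK = Y * dS * sqrt(pi * a).
# The constants below are illustrative assumptions, not values from the study.

def paris_cycles(a0, af, dS, C, m, Y=1.0, steps=10000):
    """Numerically integrate dN = da / (C * dK^m) from a0 to af (meters)."""
    da = (af - a0) / steps
    cycles = 0.0
    a = a0
    for _ in range(steps):
        dK = Y * dS * math.sqrt(math.pi * a)  # stress intensity range, MPa*sqrt(m)
        cycles += da / (C * dK ** m)
        a += da
    return cycles

# Hypothetical steel-like parameters: C in (m/cycle)/(MPa*sqrt(m))^m.
N = paris_cycles(a0=0.001, af=0.01, dS=100.0, C=1e-11, m=3.0)
print(f"{N:.0f} cycles to grow from 1 mm to 10 mm")
```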

    URLLC with coded massive MIMO via random linear codes and GRAND

    A present challenge in wireless communications is the assurance of ultra-reliable low-latency communication (URLLC). While reliability is well known to improve with channel coding over long codewords, this usually implies using interleavers, which introduce undesirable delay. Switching to short codewords is necessary to minimize the decoding delay. This work proposes a combined coding and decoding scheme, used alongside spatial signal processing, as a means to provide URLLC over a fading channel. The paper advocates the use of random linear codes (RLCs) over a massive MIMO (mMIMO) channel with standard zero-forcing detection and guessing random additive noise decoding (GRAND). The performance of several schemes is assessed over an mMIMO flat fading channel. The proposed scheme greatly outperforms the equivalent scheme using 5G's polar encoding and decoding at signal-to-noise ratios (SNRs) of interest. While the complexity of the polar code is constant across all SNRs, using RLCs with GRAND achieves much faster decoding times over most of the SNR range, further reducing latency.
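    To make the GRAND side concrete, the following minimal sketch illustrates the basic guessing principle for a binary linear code: test error patterns in increasing Hamming weight and accept the first candidate with a zero syndrome. The small random parity-check matrix is a toy stand-in for the RLCs used in the paper, and no mMIMO processing is modeled.

```python
import numpy as np
from itertools import combinations

# Minimal sketch of the GRAND principle over GF(2): guess error patterns in
# increasing Hamming weight and stop at the first one that turns the
# hard-decision word into a codeword (zero syndrome). The parity-check
# matrix below is a small random toy example, not a code from the paper.

rng = np.random.default_rng(0)
n, k = 8, 4
H = rng.integers(0, 2, size=(n - k, n))  # toy random parity-check matrix

def is_codeword(word):
    return not np.any(H @ word % 2)      # zero syndrome <=> codeword

def grand_decode(received, max_weight=3):
    """Return the first codeword found by flipping up to max_weight bits."""
    for w in range(max_weight + 1):
        for positions in combinations(range(n), w):
            candidate = received.copy()
            candidate[list(positions)] ^= 1  # apply the guessed error pattern
            if is_codeword(candidate):
                return candidate, w
    return None, None  # abandon guessing beyond max_weight

received = rng.integers(0, 2, size=n)    # hard decisions from the detector
decoded, weight = grand_decode(received)
print(decoded, "found with guessed error weight", weight)
```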

    Symbol-level GRAND for high-order modulation over block fading channels

    Guessing random additive noise decoding (GRAND) is a noise-centric decoding method suitable for low-latency communications, as it supports error correction codes that generate short codewords. GRAND estimates transmitted codewords by guessing the error patterns that altered them during transmission. The guessing process tests error patterns in increasing order of Hamming weight, an approach that fits binary transmission over additive white Gaussian noise channels. This letter considers the transmission of coded and modulated data over block fading channels and proposes a more computationally efficient variant of GRAND that leverages information on the modulation scheme and the fading channel. At the core of the proposed variant, referred to as symbol-level GRAND, is an expression that approximately computes the probability of occurrence of an error pattern and determines the order in which error patterns are tested. Analysis and simulation results demonstrate that symbol-level GRAND produces estimates of the transmitted codewords faster than the original GRAND, at the cost of a small increase in memory requirements.
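    The ordering idea at the heart of symbol-level GRAND can be sketched as follows: rather than ranking bit patterns by Hamming weight, rank symbol-level error patterns by an approximate probability built from per-symbol channel reliabilities. The error model below, with one error probability per symbol driven by an illustrative fading gain, is a simplifying assumption and not the letter's exact expression.

```python
from itertools import combinations
import numpy as np

# Sketch of the ordering behind symbol-level GRAND: score each candidate
# set of erroneous symbols by an approximate probability and test the
# highest-scoring patterns first. The per-symbol error probability model
# below is an illustrative assumption.

def ranked_symbol_patterns(p_err, max_symbols=2):
    """Return sets of symbol indices, most likely error pattern first.

    p_err[i] approximates the probability that symbol i is wrong; a
    pattern's score is prod(p_err[i]) over erroneous symbols times
    prod(1 - p_err[j]) over the remaining, correct symbols.
    """
    n = len(p_err)
    patterns = []
    for w in range(max_symbols + 1):
        for idx in combinations(range(n), w):
            score = 1.0
            for i in range(n):
                score *= p_err[i] if i in idx else (1.0 - p_err[i])
            patterns.append((score, idx))
    patterns.sort(key=lambda t: -t[0])
    return [idx for _, idx in patterns]

# Deep fades make some symbols far less reliable than others.
fading_gain = np.array([1.2, 0.15, 0.9, 0.3])  # illustrative block fades
p_err = np.clip(0.5 * np.exp(-fading_gain ** 2 / 0.1), 1e-6, 0.5)
for pattern in ranked_symbol_patterns(p_err)[:5]:
    print(pattern)  # faded symbols (indices 1 and 3) are guessed first
```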

    A spatiotemporal Data Envelopment Analysis (S-T DEA) approach: the need to assess evolving units

    One of the major challenges in measuring efficiency in terms of resources and outcomes is the assessment of the evolution of units over time. Although Data Envelopment Analysis (DEA) has been applied to time series datasets, DEA models, by construction, form the reference set for inefficient units (lambda values) based on their distance from the efficient frontier, that is, in a spatial manner. However, when dealing with temporal datasets, the proximity in time between units should also be taken into account, since it reflects the structural resemblance among time periods of a unit that evolves. In this paper, we propose a two-stage spatiotemporal DEA approach that captures both the spatial and the temporal dimension through a multi-objective programming model. In the first stage, DEA is solved iteratively, admitting for each unit only previous decision-making units (DMUs) as peers in its reference set, as sketched below. In the second stage, the lambda values derived from the first stage are fed to a multi-objective mixed-integer linear programming model, which filters the peers in the reference set based on weights assigned to the spatial and temporal dimensions. The approach is demonstrated on a real-world example drawn from software development.
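    A minimal sketch of the first stage, under simplifying assumptions (a single evolving unit with one input and one output per period, constant returns to scale), is the input-oriented CCR envelopment model below, solved per period with only earlier periods admitted as peers. The data are illustrative, not from the paper's case study.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the first stage: an input-oriented CCR DEA model solved for
# each period of one evolving unit, where only *earlier* periods may act
# as peers in the reference set. Data are illustrative.

X = np.array([[4.0], [3.5], [3.0], [3.2]])  # inputs,  rows = periods
Y = np.array([[2.0], [2.4], [2.9], [2.8]])  # outputs, rows = periods

def dea_efficiency(t):
    """Efficiency of period t using only periods 0..t as peers."""
    peers = list(range(t + 1))               # temporal restriction
    n_in, n_out = X.shape[1], Y.shape[1]
    # Decision vector: [theta, lambda_0, ..., lambda_t]; minimize theta.
    c = np.zeros(1 + len(peers)); c[0] = 1.0
    A_ub, b_ub = [], []
    for i in range(n_in):    # sum_j lam_j * x_ij - theta * x_it <= 0
        A_ub.append([-X[t, i]] + [X[j, i] for j in peers]); b_ub.append(0.0)
    for r in range(n_out):   # -sum_j lam_j * y_rj <= -y_rt
        A_ub.append([0.0] + [-Y[j, r] for j in peers]); b_ub.append(-Y[t, r])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + len(peers)))
    return res.x[0], res.x[1:]               # efficiency, lambda values

for t in range(len(X)):
    theta, lam = dea_efficiency(t)
    print(f"period {t}: efficiency={theta:.3f}, lambdas={np.round(lam, 3)}")
    # The first period is efficient by construction: it has no earlier peers.
```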

    Automated office blood pressure measurements in primary care are misleading in more than one third of treated hypertensives: The VALENTINE-Greece Home Blood Pressure Monitoring study

    Background: This study assessed the diagnostic reliability of automated office blood pressure (OBP) measurements in treated hypertensive patients in primary care by evaluating the prevalence of the white coat hypertension (WCH) and masked uncontrolled hypertension (MUCH) phenomena.
    Methods: Primary care physicians nationwide in Greece assessed consecutive hypertensive patients on stable treatment using OBP measurements (1 visit, triplicate measurements) and home blood pressure (HBP) measurements (7 days, duplicate morning and evening measurements). All measurements were performed using validated automated devices with Bluetooth capacity (Omron M7 Intelli-IT). Uncontrolled OBP was defined as ≥140/90 mmHg, and uncontrolled HBP was defined as ≥135/85 mmHg.
    Results: A total of 790 patients recruited by 135 doctors were analyzed (age: 64.5 ± 14.4 years, diabetics: 21.4%, smokers: 20.6%, average number of antihypertensive drugs: 1.6 ± 0.8). OBP (137.5 ± 9.4/84.3 ± 7.7 mmHg, systolic/diastolic) was higher than HBP (130.6 ± 11.2/79.9 ± 8 mmHg; difference 6.9 ± 11.6/4.4 ± 7.6 mmHg, p …).
    Conclusions: In primary care, automated OBP measurements are misleading in approximately 40% of treated hypertensive patients. HBP monitoring is mandatory to avoid overtreatment of subjects with the WCH phenomenon and to prevent undertreatment and subsequent excess cardiovascular disease in MUCH.
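    The phenotypes the study counts follow mechanically from crossing the two stated cut-offs (OBP ≥ 140/90 mmHg, HBP ≥ 135/85 mmHg). The sketch below encodes that classification for a treated patient; it is an illustration of the definitions, not the study's analysis code.

```python
# Minimal sketch of the phenotypes implied by the study's cut-offs:
# office BP uncontrolled at >= 140/90 mmHg, home BP uncontrolled at
# >= 135/85 mmHg. Naming follows the abstract (WCH, MUCH); the function
# itself is illustrative, not the study's analysis code.

def classify(obp_sys, obp_dia, hbp_sys, hbp_dia):
    office_high = obp_sys >= 140 or obp_dia >= 90
    home_high = hbp_sys >= 135 or hbp_dia >= 85
    if office_high and not home_high:
        return "white coat hypertension (WCH)"            # risk of overtreatment
    if not office_high and home_high:
        return "masked uncontrolled hypertension (MUCH)"  # risk of undertreatment
    if office_high and home_high:
        return "uncontrolled hypertension"
    return "controlled hypertension"

print(classify(148, 88, 128, 78))  # -> WCH: office high, home controlled
print(classify(132, 82, 142, 88))  # -> MUCH: office controlled, home high
```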